3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful regions by assigning a class label to every point (or mesh face). Identifying and labeling the objects and surfaces in a 3D scene this way supports applications such as robotics, autonomous driving, and augmented reality.
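To make the input/output contract concrete: a segmenter maps an (N, 3) point cloud to one class label per point. The sketch below is purely illustrative, assuming a toy scene and a hand-written height rule in place of a learned model (real systems would use a trained point-based network), just to show the per-point labeling shape.

```python
import numpy as np

# Toy illustration of 3D semantic segmentation output: every point in an
# (N, 3) cloud receives one class label. Real systems learn this mapping;
# the height rule below is a hypothetical stand-in to show the shapes.
CLASSES = {0: "ground", 1: "object"}

def segment_by_height(points: np.ndarray, ground_z: float = 0.05) -> np.ndarray:
    """Assign a per-point class label: 0 = ground, 1 = object above ground."""
    return (points[:, 2] > ground_z).astype(np.int64)

rng = np.random.default_rng(0)
ground = rng.uniform([-1, -1, 0.0], [1, 1, 0.02], size=(500, 3))      # flat patch
box = rng.uniform([-0.2, -0.2, 0.3], [0.2, 0.2, 0.7], size=(200, 3))  # raised object
cloud = np.vstack([ground, box])                                      # (700, 3) scene

labels = segment_by_height(cloud)
print(labels.shape)         # (700,) — one label per point
print(np.bincount(labels))  # [500 200] — points per class
```

Whatever the model, the evaluation target is the same: per-point label agreement with ground truth, typically reported as mean IoU over classes.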

Generalized Zero-Shot Learning for Point Cloud Segmentation with Evidence-Based Dynamic Calibration

Sep 10, 2025

Enhancing 3D Medical Image Understanding with Pretraining Aided by 2D Multimodal Large Language Models

Sep 11, 2025

Point Linguist Model: Segment Any Object via Bridged Large 3D-Language Model

Sep 09, 2025

OmniMap: A General Mapping Framework Integrating Optics, Geometry, and Semantics

Sep 09, 2025

SGS-3D: High-Fidelity 3D Instance Segmentation via Reliable Semantic Mask Splitting and Growing

Sep 05, 2025

CoRe-GS: Coarse-to-Refined Gaussian Splatting with Semantic Object Focus

Sep 05, 2025

PointAD+: Learning Hierarchical Representations for Zero-shot 3D Anomaly Detection

Sep 03, 2025

MedVista3D: Vision-Language Modeling for Reducing Diagnostic Errors in 3D CT Disease Detection, Understanding and Reporting

Sep 04, 2025

Improved 3D Scene Stylization via Text-Guided Generative Image Editing with Region-Based Control

Sep 04, 2025

SeqVLM: Proposal-Guided Multi-View Sequences Reasoning via VLM for Zero-Shot 3D Visual Grounding

Aug 28, 2025